"Robot, make me a chair"

by Andrew Paul Laurent | MIT Concrete Sustainability Hub
Boston MA (SPX) Dec 17, 2025

Computer-aided design (CAD) systems are tried-and-true tools used to design many of the physical objects we use each day. But CAD software requires extensive expertise to master, and many tools incorporate such a high level of detail that they don't lend themselves to brainstorming or rapid prototyping.

In an effort to make design faster and more accessible for non-experts, researchers from MIT and elsewhere developed an AI-driven robotic assembly system that allows people to build physical objects by simply describing them in words.

Their system uses a generative AI model to build a 3D representation of an object's geometry based on the user's prompt. Then, a second generative AI model reasons about the desired object and figures out where different components should go, according to the object's function and geometry.

The system can automatically build the object from a set of prefabricated parts using robotic assembly. It can also iterate on the design based on feedback from the user.
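
The article does not include code, but the pipeline it describes can be outlined. The sketch below is a hypothetical outline, not the authors' implementation: every function name is a placeholder standing in for one stage of the system (text-to-3D generation, VLM component assignment, robotic assembly), and the loop reflects the user-feedback iteration described above.

    # Hypothetical sketch of the pipeline described above; the function
    # bodies are placeholders, not the authors' code.

    def generate_3d_mesh(prompt: str) -> dict:
        """Stage 1 (placeholder): a text-to-3D model produces a mesh."""
        return {"prompt": prompt, "surfaces": []}

    def vlm_assign_components(mesh: dict, prompt: str) -> dict:
        """Stage 2 (placeholder): a vision-language model decides where
        structural components and panel components should go."""
        return {"structural": [], "panels": []}

    def robot_assemble(plan: dict) -> None:
        """Stage 3 (placeholder): a robot arm builds the plan from
        reusable prefabricated parts."""
        print("assembling:", plan)

    def text_to_object(prompt: str) -> None:
        mesh = generate_3d_mesh(prompt)            # e.g. "make me a chair"
        while True:
            plan = vlm_assign_components(mesh, prompt)
            robot_assemble(plan)
            feedback = input("Refinement (blank to accept): ")
            if not feedback:
                break                              # user accepts the design
            prompt = feedback                      # e.g. "no panels on the seat"

    text_to_object("make me a chair")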

The researchers used this end-to-end system to fabricate furniture, including chairs and shelves, from two types of premade components. The components can be disassembled and reassembled at will, reducing the amount of waste generated through the fabrication process.

They evaluated these designs through a user study and found that more than 90 percent of participants preferred the objects designed by their AI-driven system over those produced by two simpler baseline approaches.

While this work is an initial demonstration, the framework could be especially useful for rapidly prototyping complex objects such as aerospace components and architectural elements. In the longer term, it could be used in homes to fabricate furniture or other objects locally, without the need to have bulky products shipped from a central facility.

"Sooner or later, we want to be able to communicate and talk to a robot and AI system the same way we talk to each other to make things together. Our system is a first step toward enabling that future," says lead author Alex Kyaw, a graduate student in the MIT departments of Electrical Engineering and Computer Science (EECS) and Architecture.

Kyaw is joined on the paper by Richa Gupta, an MIT architecture graduate student; Faez Ahmed, associate professor of mechanical engineering; Lawrence Sass, professor and chair of the Computation Group in the Department of Architecture; senior author Randall Davis, an EECS professor and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as others at Google DeepMind and Autodesk Research. The paper was recently presented at the Conference on Neural Information Processing Systems.

Generating a multicomponent design

While generative AI models are good at generating 3D representations of objects, known as meshes, from text prompts, most produce a single uniform mesh of the object's geometry that lacks the component-level details needed for robotic assembly.

Separating these meshes into components is challenging for a model because assigning components depends on the geometry and functionality of the object and its parts.

The researchers tackled these challenges using a vision-language model (VLM), a powerful generative AI model that has been pre-trained to understand images and text. They task the VLM with figuring out how two types of prefabricated parts, structural components and panel components, should fit together to form an object.

"There are many ways we can put panels on a physical object, but the robot needs to see the geometry and reason over that geometry to make a decision about it. By serving as both the eyes and brain of the robot, the VLM enables the robot to do this," Kyaw says.

A user prompts the system with text, perhaps by typing "make me a chair," and gives it an AI-generated image of a chair to start.

Then, the VLM reasons about the chair and determines where panel components go on top of structural components, based on the functionality of the many example objects it has seen before. For instance, the model can determine that the seat and backrest should have panels to provide surfaces for someone to sit on and lean against.

It outputs this information as text, such as "seat" or "backrest." Each surface of the chair is then labeled with numbers, and the information is fed back to the VLM.

Then the VLM chooses the labels that correspond to the geometric parts of the chair that should receive panels on the 3D mesh to complete the design.
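
As a concrete illustration of this two-pass exchange, here is a hypothetical sketch. The function ask_vlm is a stand-in for any image-plus-text model call (it returns canned answers here so the example runs end to end); the prompt wording and the part-names-then-numbered-surfaces flow follow the description above, not the authors' code.

    # Hypothetical sketch of the two-pass VLM exchange described above.
    # ask_vlm is a placeholder for a real vision-language model call; it
    # returns canned answers so the example is runnable.

    def ask_vlm(image: bytes, question: str) -> str:
        if "part names" in question:
            return "seat, backrest"            # pass 1: functional parts
        return "3, 7"                          # pass 2: numbered surfaces

    def choose_panel_surfaces(render: bytes, numbered_render: bytes) -> list[int]:
        # Pass 1: reason about function, answering in words.
        parts = ask_vlm(
            render,
            "Which parts of this object need panels for its function? "
            "Answer with part names.",
        )
        # Pass 2: the candidate surfaces are numbered in the image; ask
        # which numbers correspond to the parts named above.
        numbers = ask_vlm(
            numbered_render,
            f"The surfaces are numbered. Which numbers are the {parts}? "
            "Answer with a comma-separated list of integers.",
        )
        return [int(n) for n in numbers.split(",")]

    print(choose_panel_surfaces(b"", b""))     # -> [3, 7]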

Human-AI co-design

The user remains in the loop throughout this process and can refine the design by giving the model a new prompt, such as "only use panels on the backrest, not the seat."

"The design space is very big, so we narrow it down through user feedback. We believe this is the best way to do it because people have different preferences, and building an idealized model for everyone would be impossible," Kyaw says.

"The human-in-the-loop process allows the users to steer the AI-generated designs and have a sense of ownership in the final result," adds Gupta.

Once the 3D mesh is finalized, a robotic assembly system builds the object using prefabricated parts. These reusable parts can be disassembled and reassembled into different configurations.

The researchers compared the results of their method with an algorithm that places panels on all horizontal surfaces that are facing up, and an algorithm that places panels randomly. In a user study, more than 90 percent of individuals preferred the designs made by their system.
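
The first baseline is simple enough to sketch. Assuming a mesh whose faces come with unit normal vectors, "horizontal and facing up" reduces to a threshold test on the normal's Z component; the data below is illustrative, not from the study.

    # Sketch of the horizontal-surface baseline: panel every face whose
    # unit normal points (nearly) straight up. Illustrative data only.

    import numpy as np

    def upward_faces(normals: np.ndarray, cos_tol: float = 0.95) -> np.ndarray:
        """Indices of faces whose normal is within ~18 degrees of +Z."""
        return np.nonzero(normals[:, 2] > cos_tol)[0]

    normals = np.array([
        [0.0, 0.0, 1.0],     # seat, facing up      -> gets a panel
        [0.0, 1.0, 0.0],     # backrest, vertical   -> skipped
        [0.0, 0.0, -1.0],    # underside, down      -> skipped
    ])
    print(upward_faces(normals))   # -> [0]

Note that this rule necessarily skips the backrest, one of the functional surfaces the VLM-based approach does catch, which is consistent with the study's preference results.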

They also asked the VLM to explain why it chose to put panels in those areas.

"We learned that the vision language model is able to understand some degree of the functional aspects of a chair, like leaning and sitting, to understand why it is placing panels on the seat and backrest. It isn't just randomly spitting out these assignments," Kyaw says.

In the future, the researchers want to enhance their system to handle more complex and nuanced user prompts, such as a table made out of glass and metal. In addition, they want to incorporate additional prefabricated components, such as gears, hinges, or other moving parts, so objects could have more functionality.

"Our hope is to drastically lower the barrier of access to design tools. We have shown that we can use generative AI and robotics to turn ideas into physical objects in a fast, accessible, and sustainable manner," says Davis.

Research Report: Text to Robotic Assembly of Multi Component Objects using 3D Generative AI and Vision Language Models

Related Links
Massachusetts Institute of Technology