Daily Technology
22 December 2025
As artificial intelligence advances, humanoid robots are rapidly capturing global attention. Recent incidents have highlighted both their potential and the critical risks involved in their development. Below are several important industry trends emerging from recent reports and real-world applications.
AI-powered humanoid robots are demonstrating levels of strength that were not widely anticipated, raising new safety concerns. Figure AI's Figure 02 robot, for example, was cited in a lawsuit alleging it was strong enough during testing to "fracture a human skull." The incident underscores the growing mechanical capabilities that make these robots more versatile in industrial and domestic environments. The trend is important because it pushes the boundaries of what robots can physically do while straining standard safety protocols. Leading companies such as Figure AI and Engine AI are at the forefront, actively testing real-world applications while grappling with the implications of physical force in human-robot interactions.
Public incidents and legal disputes are raising significant questions about how transparent robot makers are regarding their machines' abilities. Ambiguity still surrounds the real-world outcomes of widely shared stunts, such as Engine AI's T800 knocking down its own CEO during a demonstration, making it difficult to assess the true capabilities, and potential dangers, of these systems. This trend matters because stakeholders increasingly demand clear, standardized reporting on robot strength, torque, and operational limits; such openness builds public trust and informs necessary regulation. Calls for regulatory standards are growing, with pressure for industry-wide norms similar to those in the automotive and aviation sectors.
Unlike familiar benchmarks for human or machine strength, determining a humanoid robot’s true power is complex. Metrics like max torque or payload don’t easily translate to how much force a robot might exert in a dynamic real-world scenario. In real-world demonstrations, such as those shared by Engine AI and Figure AI, the nuances of applied force—and their potential consequences—are often lost on general audiences. This trend is significant for both investors and practitioners, highlighting the need for clearer industry metrics, safety testing, and third-party validation to accurately assess risks as robots enter public and workplace settings.
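The gap between spec-sheet metrics and real-world applied force can be illustrated with basic mechanics. The sketch below uses hypothetical numbers (the function names and figures are illustrative, not drawn from any vendor's datasheet): a joint's rated torque yields a modest static push, while the same arm stopping abruptly in a collision can produce a far larger peak force.

```python
def static_force(torque_nm: float, arm_length_m: float) -> float:
    """Steady force at the end of a rigid arm: F = tau / L."""
    return torque_nm / arm_length_m

def impact_force(mass_kg: float, velocity_ms: float, stop_distance_m: float) -> float:
    """Average force to stop a moving mass over a short distance,
    from the work-energy relation: F = m * v^2 / (2 * d)."""
    return mass_kg * velocity_ms ** 2 / (2 * stop_distance_m)

# Hypothetical arm: 50 N*m joint torque, 0.5 m reach.
print(static_force(50.0, 0.5))          # 100 N of steady push
# The same 5 kg arm moving at 2 m/s, stopped over 1 cm in a collision:
print(impact_force(5.0, 2.0, 0.01))     # 1000 N peak-equivalent force
```

The order-of-magnitude difference between the two numbers is the point: a "max torque" figure alone says little about what a robot could do to a person in a dynamic collision, which is why standardized safety testing matters.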
Lawsuits like the one faced by Figure AI signal an evolving legal and ethical landscape. As humanoid robots grow in strength and autonomy, companies must address not only technical development but also liability for harm and ethical responsibility to users. Disputes over workplace safety, transparency, and whistleblower retaliation are increasingly likely, as seen with Figure AI and other firms. These legal precedents will play a significant role in shaping company policies, insurance products, and regulatory compliance frameworks.
In summary, the intersection of rapid technical progress, heightened transparency demands, evolving legal standards, and the challenge of robust safety measurement define the present and near-future of AI humanoid robotics. Industry stakeholders should monitor both technical and regulatory developments to maintain a balanced approach to opportunity and risk.