There have been many advances in vision-language models (VLMs) that can match natural-language queries to objects in a visual scene, and researchers are experimenting with how these models can be ...
Axios on MSN
UT researchers develop sensitive robot hands
A new type of robotic hand developed at the University of Texas can grasp objects as fragile as a potato chip or a raspberry ...
Have you ever wondered what it would take to train a robot to walk, grasp objects, or navigate a cluttered room with the same ease as a human? For many, the idea of teaching robots these complex tasks ...
A new system helps robots navigate homes they’ve never seen before with a little help from open-source AI models. Robots are good at certain tasks. They’re great at picking up and moving objects, for ...